
Unsupervised Feature Extraction by Time-Contrastive Learning and Nonlinear ICA

Neural Information Processing Systems

Nonlinear independent component analysis (ICA) provides an appealing framework for unsupervised feature learning, but the models proposed so far are not identifiable. Here, we first propose a new intuitive principle of unsupervised deep learning from time series which uses the nonstationary structure of the data. Our learning principle, time-contrastive learning (TCL), finds a representation which allows optimal discrimination of time segments (windows). Surprisingly, we show how TCL can be related to a nonlinear ICA model, when ICA is redefined to include temporal nonstationarities. In particular, we show that TCL combined with linear ICA estimates the nonlinear ICA model up to point-wise transformations of the sources, and this solution is unique --- thus providing the first identifiability result for nonlinear ICA which is rigorous, constructive, as well as very general.


Reviews: Unsupervised Feature Extraction by Time-Contrastive Learning and Nonlinear ICA

Neural Information Processing Systems

There has been a lot of progress taking the successes of supervised techniques and extending them to unsupervised domains by defining an interesting label to predict, and this paper continues this trend by highlighting the task of discriminating between time windows. Furthermore, I think the emphasis on the importance of identifiability for nonlinear ICA and showing how it is solved in this case is a timely contribution. However, these contributions were somewhat muted by other shortcomings of the approach which make me doubt the robustness and generality of the method. This is a significant source of prior knowledge to include in the experiments and makes the comparisons unfair. In particular the comment that "none of the hidden units seem to represent artefacts, in contrast to ICA" rings a bit hollow since it seems that the elimination of artifacts was really achieved through hand-picked choice in the model.


Unsupervised Feature Extraction by Time-Contrastive Learning and Nonlinear ICA

Hyvarinen, Aapo, Morioka, Hiroshi

Neural Information Processing Systems
